Efficient Distributed Online Prediction and Stochastic Optimization With Approximate Distributed Averaging

Authors
Abstract


Similar Articles

Memory and Communication Efficient Distributed Stochastic Optimization with Minibatch Prox

We present and analyze statistically optimal, communication- and memory-efficient distributed stochastic optimization algorithms with near-linear speedups (up to log-factors). This improves over prior work, which includes methods with near-linear speedups but polynomial communication requirements (accelerated minibatch SGD) and communication-efficient methods which do not exhibit any runtime speedups…

Full text
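
The snippet above cuts off before the method itself, so the following is only a minimal sketch of the minibatch-prox idea the title names, under assumptions of my own: each worker inexactly solves a proximal subproblem on its local minibatch around the current iterate, and the local solutions are averaged once per communication round. The quadratic toy loss, the inner gradient solver, and every name below are illustrative, not the paper's actual construction.

```python
import numpy as np

def local_prox_step(w_ref, X, y, lam=1.0, steps=50, lr=0.1):
    """Inexactly solve one worker's proximal subproblem,
    min_w (1/2n)||Xw - y||^2 + (lam/2)||w - w_ref||^2,
    with a few plain gradient steps standing in for the inner solver."""
    w = w_ref.copy()
    n = len(y)
    for _ in range(steps):
        grad = X.T @ (X @ w - y) / n + lam * (w - w_ref)
        w -= lr * grad
    return w

def minibatch_prox(workers_data, dim, rounds=20, lam=1.0):
    """Outer loop: one communication round per iteration. Every worker solves
    its proximal subproblem around the current average, then the local
    solutions are averaged (the only communication in the round)."""
    w = np.zeros(dim)
    for _ in range(rounds):
        local = [local_prox_step(w, X, y, lam) for X, y in workers_data]
        w = np.mean(local, axis=0)
    return w

# Toy usage: 4 workers, each holding a minibatch from a shared linear model.
rng = np.random.default_rng(0)
w_true = rng.normal(size=5)
data = []
for _ in range(4):
    X = rng.normal(size=(100, 5))
    data.append((X, X @ w_true + 0.01 * rng.normal(size=100)))
print(np.linalg.norm(minibatch_prox(data, dim=5) - w_true))  # should be small
```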

Network Location Problem with Stochastic and Uniformly Distributed Demands

This paper investigates the network location problem for single-server facilities that are subject to congestion. On each network edge, customers are uniformly distributed along the edge, and their requests for service are assumed to be generated according to a Poisson process. A number of facilities are to be selected from a number of candidate sites, and a single server is located at each facility…

Full text
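
As a rough illustration of the kind of model sketched above, and not the paper's actual formulation, the toy below scores a set of open facilities by expected response time: demand points with Poisson request rates are routed to their nearest open site, and congestion at each single-server facility is approximated by an M/M/1 sojourn time 1/(mu - lambda). The discretized demand points, the M/M/1 stand-in, and the brute-force subset search are all assumptions made for the example.

```python
import itertools
import numpy as np

def expected_response_time(open_sites, dist, rates, mu):
    """Score one facility set: each demand point is served by its nearest open
    site; a facility receiving total Poisson rate lam is approximated as an
    M/M/1 queue with service rate mu, adding expected sojourn 1/(mu - lam)."""
    nearest = [min(open_sites, key=lambda s: dist[d][s]) for d in range(len(rates))]
    load = {s: sum(r for d, r in enumerate(rates) if nearest[d] == s)
            for s in open_sites}
    if any(load[s] >= mu for s in open_sites):
        return np.inf  # an overloaded server has unbounded queueing delay
    return sum(r * (dist[d][nearest[d]] + 1.0 / (mu - load[nearest[d]]))
               for d, r in enumerate(rates)) / sum(rates)

def best_sites(n_candidates, k, dist, rates, mu):
    """Brute force over all k-subsets of candidate sites (tiny instances only)."""
    return min(itertools.combinations(range(n_candidates), k),
               key=lambda sites: expected_response_time(sites, dist, rates, mu))

# Toy instance: 5 demand points, candidate sites co-located with them, pick 2.
rng = np.random.default_rng(1)
pts = rng.uniform(size=(5, 2))
dist = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=2)
rates = [0.3] * 5   # Poisson request rate generated at each demand point
print(best_sites(5, 2, dist, rates, mu=2.0))
```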

Robust Distributed Online Prediction

The standard model of online prediction deals with serial processing of inputs by a single processor. However, in large-scale online prediction problems, where inputs arrive at a high rate, an increasingly common necessity is to distribute the computation across several processors. A non-trivial challenge is to design distributed algorithms for online prediction which maintain good regret guarantees…

Full text

Optimal Distributed Online Prediction

Online prediction methods are typically studied as serial algorithms running on a single processor. In this paper, we present the distributed mini-batch (DMB) framework, a method of converting a serial gradient-based online algorithm into a distributed algorithm, and prove an asymptotically optimal regret bound for smooth convex loss functions and stochastic examples. Our analysis explicitly takes…

Full text
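
The following is a minimal sketch of a DMB-style round under stated assumptions, not the paper's exact procedure: each of k workers averages per-example gradients over its share of a size-b mini-batch, the partial averages are combined in one communication step, and a single serial-style gradient update follows. (The paper this page indexes studies, per its title, replacing such exact averages with approximate distributed averaging; here the average is exact for simplicity.)

```python
import numpy as np

def dmb_round(w, shards, grad_fn, lr):
    """One DMB round: each of the k workers averages per-example gradients over
    its share of the size-b mini-batch, the k partial averages are combined
    (one vector-sum communication), and one serial-style gradient step follows."""
    partial = [np.mean([grad_fn(w, x, y) for x, y in shard], axis=0)
               for shard in shards]
    g = np.mean(partial, axis=0)  # exact network-wide average
    return w - lr * g

def grad_fn(w, x, y):
    """Gradient of the smooth squared loss 0.5*(w.x - y)^2 on one example."""
    return (w @ x - y) * x

# Toy stream: 4 workers, 25 fresh examples each per round (b = 100).
rng = np.random.default_rng(0)
w_true = rng.normal(size=3)
w = np.zeros(3)
for t in range(200):
    shards = []
    for _ in range(4):
        X = rng.normal(size=(25, 3))
        shards.append(list(zip(X, X @ w_true + 0.01 * rng.normal(size=25))))
    w = dmb_round(w, shards, grad_fn, lr=0.5 / (1 + t) ** 0.5)
print(np.linalg.norm(w - w_true))  # should shrink as rounds accumulate
```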

Communication-Efficient Distributed Optimization using an Approximate Newton-type Method

We present a novel Newton-type method for distributed optimization, which is particularly well suited for stochastic optimization and learning problems. For quadratic objectives, the method enjoys a linear rate of convergence which provably improves with the data size, requiring an essentially constant number of iterations under reasonable assumptions. We provide theoretical and empirical evidence…

Full text
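
Since the abstract is cut off, here is only a hedged sketch of a DANE-style round for the quadratic case it mentions: every machine applies the inverse of its damped local Hessian to the globally averaged gradient, and the resulting steps are averaged. The damping parameter mu, the toy data, and the two-communication-round pattern are assumptions for illustration.

```python
import numpy as np

def newton_type_round(w, A_list, b_list, mu=0.1):
    """One approximate-Newton round on quadratics f_i(w) = 0.5*w'A_i w - b_i'w:
    average the local gradients (communication round 1), let every machine
    apply its damped local Hessian inverse to that global gradient, and
    average the resulting steps (communication round 2)."""
    d = len(w)
    grad = np.mean([A @ w - b for A, b in zip(A_list, b_list)], axis=0)
    steps = [np.linalg.solve(A + mu * np.eye(d), grad) for A in A_list]
    return w - np.mean(steps, axis=0)

# Toy setup: 4 machines whose local Hessians concentrate around a common mean,
# which is what makes the local-inverse/global-gradient step contract quickly.
rng = np.random.default_rng(2)
d, m = 6, 4
A_list = []
for _ in range(m):
    M = rng.normal(size=(200, d))
    A_list.append(M.T @ M / 200)
b_list = [rng.normal(size=d) for _ in range(m)]
w = np.zeros(d)
for _ in range(10):
    w = newton_type_round(w, A_list, b_list)
w_star = np.linalg.solve(np.mean(A_list, axis=0), np.mean(b_list, axis=0))
print(np.linalg.norm(w - w_star))  # gap to the exact global minimizer
```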


Journal

Journal Title: IEEE Transactions on Signal and Information Processing over Networks

Year: 2016

ISSN: 2373-776X, 2373-7778

DOI: 10.1109/tsipn.2016.2620440